Experimental data are often costly to obtain, which makes it difficult to calibrate complex models. For many models an experimental design that produces the best calibration given a limited experimental budget is not obvious. This paper introduces a deep reinforcement learning (RL) algorithm for design of experiments that maximizes the information gain measured by Kullback–Leibler divergence obtained via the Kalman filter (KF). This combination enables experimental design for rapid online experiments where manual trial-and-error is not feasible in the high-dimensional parametric design space. We formulate possible configurations of experiments as a decision tree and a Markov decision process, where a finite choice of actions is available at each incremental step. Once an action is taken, a variety of measurements are used to update the state of the experiment. This new data leads to a Bayesian update of the parameters by the KF, which is used to enhance the state representation. In contrast to the Nash–Sutcliffe efficiency index, which requires additional sampling to test hypotheses for forward predictions, the KF can lower the cost of experiments by directly estimating the values of new data acquired through additional actions. In this work our applications focus on mechanical testing of materials. Numerical experiments with complex, history-dependent models are used to verify the implementation and benchmark the performance of the RL-designed experiments.
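The reward signal described in the abstract — information gain measured as the KL divergence between the KF prior and posterior over the model parameters — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the observation operator `H`, noise covariance `R`, and measurement `y` below are hypothetical placeholders.

```python
import numpy as np

def kalman_update(m, P, y, H, R):
    """One linear Kalman filter update: Gaussian prior (m, P),
    scalar/vector observation y = H x + noise with covariance R."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    m_post = m + K @ (y - H @ m)         # posterior mean
    P_post = (np.eye(len(m)) - K @ H) @ P  # posterior covariance
    return m_post, P_post

def kl_gaussian(m1, P1, m0, P0):
    """KL( N(m1,P1) || N(m0,P0) ): information gained when the
    prior (m0, P0) is updated to the posterior (m1, P1)."""
    k = len(m0)
    P0_inv = np.linalg.inv(P0)
    dm = m0 - m1
    return 0.5 * (np.trace(P0_inv @ P1) + dm @ P0_inv @ dm - k
                  + np.log(np.linalg.det(P0) / np.linalg.det(P1)))

# Toy episode step: a 2-parameter model, one measurement per action.
m0 = np.zeros(2)              # prior mean over model parameters
P0 = np.eye(2)                # prior covariance
H = np.array([[1.0, 0.5]])    # hypothetical observation operator
R = np.array([[0.1]])         # hypothetical measurement noise
y = np.array([0.3])           # hypothetical measurement

m1, P1 = kalman_update(m0, P0, y, H, R)
reward = kl_gaussian(m1, P1, m0, P0)  # RL reward: information gain
```

In an RL loop, each candidate action would map to a different `H` (which quantity the next test measures), and the agent is trained to pick the sequence of actions that accumulates the most information gain within the experimental budget.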
-
Miao, Yucong; Villarreal, Ruben; Talapatra, Anjana; Arróyave, Raymundo; Vlassak, Joost J. (Acta Materialia)
-
Sellers, Diane G.; Braham, Erick J.; Villarreal, Ruben; Zhang, Baiyu; Parija, Abhishek; Brown, Timothy D.; Alivio, Theodore E.; Clarke, Heidi; De Jesus, Luis R.; Zuin, Lucia; et al. (Journal of the American Chemical Society)